
    ELIMINATING THE POSITION SENSOR IN A SWITCHED RELUCTANCE MOTOR DRIVE ACTUATOR APPLICATION

    The switched reluctance motor (SRM) is receiving attention because of its merits: high operating-temperature capability, fault tolerance, an inverter topology that inherently prevents shoot-through, high power density, high-speed operation, and small rotor inertia. Rotor position information plays a critical role in the control of the SRM. Conventionally, separate position sensors are used to obtain this information, but they add complexity and cost to the control system and reduce its reliability and flexibility. To overcome these drawbacks, this dissertation proposes and investigates a position sensorless control system that meets the needs of an electric actuator application and is capable of working from zero to high speed. Two control strategies are proposed, one for low speeds and one for high speeds. Each strategy uses a state observer to estimate rotor position and speed and is capable of four-quadrant operation. The low-speed strategy uses a Luenberger observer, named the inductance-profile-demodulator-based observer: a pulse voltage applied to the SRM's idle phases generates triangle-shaped phase currents whose amplitude is modulated by the SRM's inductance. The current is demodulated and combined with the observer output to produce an error input to the observer, so that the observer tracks the actual rotor position. This strategy can determine the rotor position at standstill and at low speeds with torques up to rated torque. A second observer, named the simplified-flux-model-based observer, is used for medium and high speeds. Here the flux is computed from the measured current and a simplified flux model; the difference between the computed flux and the measured flux generates the error input to the observer, so that it tracks the actual rotor position. Since the speed ranges of the two control strategies overlap, the final control system works from zero to high speed by switching between the two observers according to the estimated speed. The stability and performance of the observers are verified with simulation and experiments.
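
    As a rough illustration of the observer structure described above (not the dissertation's actual design), the sketch below shows a Luenberger-style update that tracks rotor position and speed from a position-dependent measurement, here an idealized phase-inductance estimate recovered by demodulating the probing-pulse current. The inductance model, observer gains, and inertia are assumed placeholder values.

        import numpy as np

        # Minimal sketch of a Luenberger-style position/speed observer.
        # L_MIN/L_MAX/N_R, the gain tuple l, and the inertia J are illustrative
        # assumptions, not parameters from the dissertation.
        L_MIN, L_MAX, N_R = 2e-3, 12e-3, 8   # unaligned/aligned inductance [H], rotor poles

        def L_model(theta):
            """Idealized sinusoidal phase-inductance profile L(theta)."""
            return 0.5 * (L_MAX + L_MIN) - 0.5 * (L_MAX - L_MIN) * np.cos(N_R * theta)

        def dL_dtheta(theta):
            """Output sensitivity (slope) of the inductance model."""
            return 0.5 * (L_MAX - L_MIN) * N_R * np.sin(N_R * theta)

        def observer_step(x_hat, L_meas, dt, J=1e-3, l=(40.0, 900.0, 5.0)):
            """One Euler update of x_hat = [position, speed, load torque]."""
            theta, omega, tau_load = x_hat
            e = L_meas - L_model(theta)                   # error against demodulated measurement
            c = dL_dtheta(theta)                          # scale the correction by the local slope
            theta += dt * (omega + l[0] * c * e)          # predict with the model, correct with e
            omega += dt * (tau_load / J + l[1] * c * e)
            tau_load += dt * (l[2] * c * e)               # slowly adapt the load-torque estimate
            return np.array([theta, omega, tau_load])

    Switching between such a low-speed observer and the flux-model-based one would then be governed by the estimated speed, as described above.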

    Adaptive Backstepping Controller Design for Stochastic Jump Systems

    In this technical note, we improve the results of a paper by Shi et al., in which problems of stochastic stability and sliding mode control were considered for a class of linear continuous-time systems with stochastic jumps. Because such a system switches stochastically between different subsystems, its dynamics cannot remain on the sliding surface of each subsystem indefinitely, and it is therefore difficult to determine whether the closed-loop system is stochastically stable. Backstepping techniques are adopted here to overcome this problem. The resulting closed-loop system is bounded in probability. It is shown that the adaptive control problem for the Markovian jump systems is solvable if a set of coupled linear matrix inequalities (LMIs) has a solution. A numerical example is given to show the potential of the proposed techniques.
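
    For context, the sketch below is a minimal feasibility test for the standard coupled LMIs associated with a Markovian jump linear system dx/dt = A(r_t) x, written with cvxpy; the mode matrices and transition-rate matrix are made-up illustrations, not the example from the technical note.

        import numpy as np
        import cvxpy as cp

        # Feasibility test: find P_i > 0 such that
        #   A_i' P_i + P_i A_i + sum_j pi_ij P_j < 0  for every mode i.
        # The system data below are illustrative assumptions.
        A = [np.array([[-2.0, 1.0], [0.0, -1.0]]),
             np.array([[-1.0, 0.5], [0.3, -3.0]])]
        Pi = np.array([[-0.8, 0.8],        # transition-rate matrix (rows sum to zero)
                       [0.5, -0.5]])

        n, N = A[0].shape[0], len(A)
        P = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
        eps, cons = 1e-6, []
        for i in range(N):
            cons.append(P[i] >> eps * np.eye(n))                      # P_i > 0
            M = A[i].T @ P[i] + P[i] @ A[i] + sum(Pi[i, j] * P[j] for j in range(N))
            cons.append((M + M.T) / 2 << -eps * np.eye(n))            # symmetrized coupled LMI

        prob = cp.Problem(cp.Minimize(0), cons)
        prob.solve()
        print("coupled LMIs feasible (stochastically stable):", prob.status == cp.OPTIMAL)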

    Concurrence-Aware Long Short-Term Sub-Memories for Person-Person Action Recognition

    Recently, Long Short-Term Memory (LSTM) has become a popular choice for modeling individual dynamics in single-person action recognition because of its ability to capture temporal information over various ranges of dynamic context. However, existing RNN models capture the temporal dynamics of person-person interactions only by naively combining the activity dynamics of the individuals or by modeling the interaction as a whole, which neglects the inter-related dynamics of how person-person interactions change over time. To this end, we propose a novel Concurrence-Aware Long Short-Term Sub-Memories (Co-LSTSM) model that captures the long-term inter-related dynamics between two interacting people from the bounding boxes covering them. Specifically, for each frame, two sub-memory units store individual motion information, while a concurrent LSTM unit selectively integrates and stores the inter-related motion information between the interacting people from these two sub-memory units via a new co-memory cell. Experimental results on the BIT and UT datasets show the superiority of Co-LSTSM over state-of-the-art methods.
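
    A rough PyTorch sketch of the described structure (assumed shapes and layer sizes, my reading of the abstract rather than the authors' implementation) might pair two per-person sub-memory cells with a concurrent cell whose gated, concatenated input plays the role of the co-memory:

        import torch
        import torch.nn as nn

        # Rough sketch: two per-person sub-memory LSTM cells plus a concurrent
        # cell that gates and fuses their hidden states at every frame.
        class CoLSTSMSketch(nn.Module):
            def __init__(self, feat_dim=512, hidden_dim=256):
                super().__init__()
                self.sub_a = nn.LSTMCell(feat_dim, hidden_dim)          # sub-memory, person A
                self.sub_b = nn.LSTMCell(feat_dim, hidden_dim)          # sub-memory, person B
                self.gate = nn.Linear(2 * hidden_dim, 2 * hidden_dim)   # selects inter-related motion
                self.co = nn.LSTMCell(2 * hidden_dim, hidden_dim)       # concurrent unit

            def forward(self, feats_a, feats_b):
                """feats_*: (T, B, feat_dim) features cropped from each person's bounding box."""
                T, B, _ = feats_a.shape
                H = self.sub_a.hidden_size
                ha, ca = feats_a.new_zeros(B, H), feats_a.new_zeros(B, H)
                hb, cb = feats_b.new_zeros(B, H), feats_b.new_zeros(B, H)
                hc, cc = feats_a.new_zeros(B, H), feats_a.new_zeros(B, H)
                for t in range(T):
                    ha, ca = self.sub_a(feats_a[t], (ha, ca))           # individual motion memory A
                    hb, cb = self.sub_b(feats_b[t], (hb, cb))           # individual motion memory B
                    pair = torch.cat([ha, hb], dim=-1)
                    gated = torch.sigmoid(self.gate(pair)) * pair       # keep concurrent motion cues
                    hc, cc = self.co(gated, (hc, cc))                   # co-memory of the interaction
                return hc                                               # final interaction representation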

    Centralized Feature Pyramid for Object Detection

    Visual feature pyramids have shown their superiority in both effectiveness and efficiency across a wide range of applications. However, existing methods concentrate excessively on inter-layer feature interactions while ignoring intra-layer feature regulation, which has been empirically shown to be beneficial. Although some methods try to learn a compact intra-layer feature representation with the help of attention mechanisms or vision transformers, they overlook the corner regions that are important for dense prediction tasks. To address this problem, we propose a Centralized Feature Pyramid (CFP) for object detection, which is based on a globally explicit centralized feature regulation. Specifically, we first propose a spatially explicit visual center scheme, in which a lightweight MLP captures the global long-range dependencies and a parallel learnable visual center mechanism captures the local corner regions of the input images. Based on this, we then propose a globally centralized regulation for the commonly used feature pyramid in a top-down fashion, where the explicit visual center information obtained from the deepest intra-layer feature is used to regulate the shallower front-end features. Compared with existing feature pyramids, CFP not only captures global long-range dependencies but also efficiently obtains an all-round yet discriminative feature representation. Experimental results on the challenging MS-COCO dataset validate that the proposed CFP achieves consistent performance gains on the state-of-the-art YOLOv5 and YOLOX object detection baselines.
    Comment: Code: https://github.com/QY1994-0919/CFPNe
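
    As a loose illustration of the explicit visual center idea (my interpretation of the abstract, not the released CFP code), the sketch below combines a lightweight MLP over flattened spatial tokens with a small learnable codebook of visual centers; all channel counts and codebook sizes are assumptions.

        import torch
        import torch.nn as nn

        # Simplified sketch: a global MLP branch plus a parallel learnable
        # "visual center" branch, fused channel-wise. Sizes are illustrative.
        class ExplicitVisualCenterSketch(nn.Module):
            def __init__(self, channels=256, num_centers=32):
                super().__init__()
                self.norm = nn.LayerNorm(channels)
                self.mlp = nn.Sequential(                     # lightweight global MLP branch
                    nn.Linear(channels, 4 * channels), nn.GELU(), nn.Linear(4 * channels, channels))
                self.centers = nn.Parameter(torch.randn(num_centers, channels))  # learnable centers
                self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

            def forward(self, x):                             # x: (B, C, H, W)
                B, C, H, W = x.shape
                tokens = x.flatten(2).transpose(1, 2)         # (B, H*W, C) spatial tokens
                g = self.mlp(self.norm(tokens))               # global long-range branch
                g = g.transpose(1, 2).reshape(B, C, H, W)
                # local branch: soft-assign each position to the learnable centers
                attn = torch.softmax(tokens @ self.centers.t(), dim=-1)   # (B, H*W, K)
                local = (attn @ self.centers).transpose(1, 2).reshape(B, C, H, W)
                return self.proj(torch.cat([g, local], dim=1))            # fused centralized feature

    A top-down pass would then use the output computed at the deepest pyramid level to regulate the shallower features, as the abstract describes.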

    Coupling Global Context and Local Contents for Weakly-Supervised Semantic Segmentation

    Thanks to their annotation-friendly supervision and satisfactory performance, Weakly-Supervised Semantic Segmentation (WSSS) approaches have been extensively studied. Recently, single-stage WSSS has been revived to alleviate the expensive computational costs and complicated training procedures of multi-stage WSSS. However, the results of such an immature model suffer from background incompleteness and object incompleteness. We empirically find that these are caused, respectively, by the insufficiency of global object context and the lack of local regional contents. Based on these observations, we propose a single-stage WSSS model supervised only by image-level class labels, termed the Weakly-Supervised Feature Coupling Network (WS-FCN), which can capture the multi-scale context formed from adjacent feature grids and encode fine-grained spatial information from the low-level features into the high-level ones. Specifically, a flexible context aggregation module is proposed to capture the global object context at different granularities, and a semantically consistent feature fusion module is proposed in a bottom-up, parameter-learnable fashion to aggregate the fine-grained local contents. Based on these two modules, WS-FCN is trained in a self-supervised, end-to-end fashion. Extensive experimental results on the challenging PASCAL VOC 2012 and MS COCO 2014 benchmarks demonstrate the effectiveness and efficiency of WS-FCN, which achieves state-of-the-art results of 65.02% and 64.22% mIoU on the PASCAL VOC 2012 val and test sets, and 34.12% mIoU on the MS COCO 2014 val set, respectively. The code and weights have been released at https://github.com/ChunyanWang1/ws-fcn.
    Comment: accepted by TNNL
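
    Purely as a sketch under assumed channel sizes and pooling grids (not the released WS-FCN code), the two described modules could be approximated by a multi-granularity context aggregator and a learnable bottom-up fusion step:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        # (1) Aggregate global context at several pooling granularities.
        class ContextAggregationSketch(nn.Module):
            def __init__(self, channels=256, grids=(1, 2, 4)):
                super().__init__()
                self.grids = grids
                self.branches = nn.ModuleList(
                    [nn.Conv2d(channels, channels, kernel_size=1) for _ in grids])
                self.fuse = nn.Conv2d(channels * (len(grids) + 1), channels, kernel_size=1)

            def forward(self, x):                             # x: (B, C, H, W)
                H, W = x.shape[-2:]
                outs = [x]
                for g, conv in zip(self.grids, self.branches):
                    ctx = F.adaptive_avg_pool2d(x, g)         # context at granularity g
                    outs.append(F.interpolate(conv(ctx), size=(H, W),
                                              mode="bilinear", align_corners=False))
                return self.fuse(torch.cat(outs, dim=1))      # multi-granularity global context

        # (2) Fuse fine-grained low-level detail into high-level features (bottom-up).
        class FeatureCouplingSketch(nn.Module):
            def __init__(self, low_channels=64, high_channels=256):
                super().__init__()
                self.proj = nn.Conv2d(low_channels, high_channels, kernel_size=1)

            def forward(self, low, high):                     # low has the larger spatial size
                low = F.adaptive_avg_pool2d(self.proj(low), high.shape[-2:])
                return high + low                             # learnable, semantically aligned sum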